
    Diffuse large B-cell lymphoma: sub-classification by massive parallel quantitative RT-PCR.

    Diffuse large B-cell lymphoma (DLBCL) is a heterogeneous entity with remarkably variable clinical outcome. Gene expression profiling (GEP) classifies DLBCL into activated B-cell like (ABC), germinal center B-cell like (GCB), and Type-III subtypes, with ABC-DLBCL characterized by a poor prognosis and constitutive NF-κB activation. A major challenge for the application of this cell-of-origin (COO) classification in routine clinical practice is to establish a robust clinical assay amenable to routine formalin-fixed paraffin-embedded (FFPE) diagnostic biopsies. In this study, we investigated the feasibility of COO classification from FFPE tissue RNA samples by massive parallel quantitative reverse transcription PCR (qRT-PCR). We established a protocol for parallel qRT-PCR of FFPE RNA samples on the Fluidigm BioMark HD system, and quantified the expression of the COO classifier genes and the NF-κB target genes that characterize ABC-DLBCL in 143 cases of DLBCL. We also trained and validated a series of basic machine-learning classifiers and their derived meta-classifiers, and identified SimpleLogistic as the top classifier, which gave excellent performance across GEP data sets derived from fresh-frozen or FFPE tissues on different microarray platforms. Finally, we applied SimpleLogistic to our qRT-PCR data set, and the assigned ABC- and GCB-DLBCL cases showed the expected characteristics in clinical outcome and NF-κB target gene expression. The methodology established in this study provides a robust approach for DLBCL sub-classification using FFPE diagnostic biopsies in a routine clinical setting.

    The research in the Du lab was supported by research grants (LLR10006 & LLR13006) from Leukaemia & Lymphoma Research, U.K. XX was supported by a visiting fellowship from the China Scholarship Council, Ministry of Education, P.R. China. This is the accepted manuscript; the final version is available from NPG at http://www.nature.com/labinvest/journal/v95/n1/full/labinvest2014136a.html
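
    As an illustration of the classifier-building step, here is a minimal sketch (not the authors' pipeline): cross-validating a logistic classifier on gene-expression features, with scikit-learn's LogisticRegression standing in for SimpleLogistic. The data shapes (143 cases, a placeholder panel of 20 classifier genes) and the random labels are assumptions for illustration only.

        # Minimal sketch: cross-validated logistic classification of DLBCL
        # cases from qRT-PCR expression values. LogisticRegression is a
        # stand-in for SimpleLogistic; all data below are placeholders.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(143, 20))    # 143 cases x 20 classifier genes (placeholder)
        y = rng.integers(0, 2, size=143)  # placeholder labels: 0 = GCB, 1 = ABC

        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        print(cross_val_score(clf, X, y, cv=5).mean())  # mean cross-validated accuracy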

    Slicing-Based Artificial Intelligence Service Provisioning on the Network Edge: Balancing AI Service Performance and Resource Consumption of Data Management

    Edge intelligence leverages computing resources on the network edge to provide artificial intelligence (AI) services close to network users. As it enables fast inference and distributed learning, edge intelligence is envisioned to be an important component of 6G networks. In this article, we investigate AI service provisioning for supporting edge intelligence. First, we present the features and requirements of AI services. Then, we introduce AI service data management and customize network slicing for AI services. Specifically, we propose a novel resource-pooling method to regularize service data exchange within the network edge while allocating network resources for AI services. Using this method, network resources can be properly allocated to network slices to fulfill AI service requirements. A trace-driven case study demonstrates that the proposed method allows network slicing to satisfy diverse AI service performance requirements via the flexible selection of resource-pooling policies. In this study, we illustrate the necessity, challenges, and potential of AI service provisioning on the network edge and provide insights into resource management for AI services.
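
    As a toy illustration of slicing a shared resource pool (an assumed simplification, not the proposed method), the sketch below first grants each slice its minimum requirement and then splits the surplus by weight; the slice names, minimums, and weights are hypothetical.

        # Illustrative resource pooling across network slices (not the
        # paper's algorithm): meet each slice's minimum requirement first,
        # then share the remaining budget in proportion to slice weights.
        def allocate_pool(budget, demands):
            """demands: {slice_id: (min_required, weight)} -> {slice_id: share}"""
            base = {s: req for s, (req, _) in demands.items()}
            surplus = budget - sum(base.values())
            if surplus < 0:
                raise ValueError("pool cannot cover minimum slice requirements")
            total_w = sum(w for _, w in demands.values())
            return {s: base[s] + surplus * w / total_w
                    for s, (_, w) in demands.items()}

        # Example: 100 compute units split between a hypothetical inference
        # slice and a model-training slice (minimums 30/40, weights 2/1).
        print(allocate_pool(100, {"inference": (30, 2), "training": (40, 1)}))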

    User Dynamics-Aware Edge Caching and Computing for Mobile Virtual Reality

    In this paper, we present a novel content caching and delivery approach for mobile virtual reality (VR) video streaming. The proposed approach aims to maximize VR video streaming performance, i.e., to minimize the video frame missing rate, by proactively caching popular VR video chunks and adaptively scheduling computing resources at an edge server based on user and network dynamics. First, we design a scalable content placement scheme that decides which video chunks to cache at the edge server based on the tradeoff between computing and caching resource consumption. Second, we propose a machine learning-assisted VR video delivery scheme, which allocates computing resources at the edge server to satisfy video delivery requests from multiple VR headsets. A Whittle index-based method is adopted to reduce the video frame missing rate by identifying network and user dynamics with low signaling overhead. Simulation results demonstrate that the proposed approach significantly improves VR video streaming performance over conventional caching and computing resource scheduling strategies.

    Comment: 38 pages, 13 figures, single column double spaced; published in IEEE Journal of Selected Topics in Signal Processing
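
    The Whittle index itself is derived from each user's underlying restless-bandit model, but the gist of index-based scheduling can be sketched as follows (a simplified stand-in, not the paper's derivation): each pending request receives a priority index and the server serves the K highest-index requests per slot. The urgency-and-miss-history index used here is a hypothetical choice for illustration.

        # Simplified index-based scheduling in the spirit of a Whittle index
        # policy (illustrative only; the true index comes from each user's
        # restless-bandit model). Requests with tight deadlines and recent
        # frame misses are served first.
        import heapq

        def schedule(requests, k):
            """requests: list of (user_id, slack_slots, recent_misses)."""
            def index(req):
                _, slack, misses = req
                return (1 + misses) / max(slack, 1)  # hypothetical priority index
            return heapq.nlargest(k, requests, key=index)

        reqs = [("u1", 2, 0), ("u2", 1, 3), ("u3", 5, 1)]
        print([user for user, _, _ in schedule(reqs, 2)])  # -> ['u2', 'u1']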

    Digital Twin-Driven Computing Resource Management for Vehicular Networks

    This paper presents a novel approach to computing resource management for edge servers in vehicular networks based on digital twins and artificial intelligence (AI). Specifically, we construct two-tier digital twins tailored to vehicular networks to capture networking-related features of vehicles and edge servers. By exploiting these features, we propose a two-stage computing resource allocation scheme. First, the central controller periodically generates reference policies for real-time computing resource allocation according to the network dynamics and service demands captured by the digital twins of edge servers. Second, computing resources of the edge servers are allocated in real time to individual vehicles via a low-complexity matching-based allocation that complies with the reference policies. By leveraging digital twins, the proposed scheme adapts to dynamic service demands and vehicle mobility in a scalable manner. Simulation results demonstrate that the proposed digital twin-driven scheme enables the vehicular network to support more computing tasks than benchmark schemes.

    Comment: 6 pages, 4 figures; accepted by 2022 IEEE GLOBECOM
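
    As a rough illustration of low-complexity matching-based allocation (an assumed stand-in, not the paper's exact scheme), vehicles can be matched to edge-server compute slots by minimizing a latency cost matrix; the use of the Hungarian method via SciPy and the cost values below are assumptions for illustration.

        # Illustrative matching of vehicles to edge-server compute slots
        # (not the paper's exact scheme). The cost matrix holds hypothetical
        # expected task latencies; linear_sum_assignment computes the
        # minimum-total-latency one-to-one matching.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        latency = np.array([[4.0, 9.0, 3.5],   # vehicle 0 on slots 0..2
                            [2.0, 6.5, 7.0],   # vehicle 1
                            [8.0, 1.5, 5.0]])  # vehicle 2
        vehicles, slots = linear_sum_assignment(latency)
        print(list(zip(vehicles, slots)), latency[vehicles, slots].sum())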